Reflective cross-site scripting vulnerability detection based on fuzzing test
NI Ping, CHEN Wei
Journal of Computer Applications    2021, 41 (9): 2594-2601.   DOI: 10.11772/j.issn.1001-9081.2020111770
In view of the low efficiency, high false negative rate and high false positive rate of current Cross-Site Scripting (XSS) vulnerability detection techniques for Web applications, a reflective XSS vulnerability detection system based on fuzzing was proposed. First, Web crawler technology was used to crawl and analyze page links to a specified depth across the whole website, so as to extract potential user injection points. Second, fuzzing test cases were constructed according to the grammatical form of the attack payload, and an initial weight was set for each element; according to the injected probe vector, the output point type was obtained to select the corresponding attack grammar for constructing a potential attack payload, which was then mutated to form the attack payload used as the request parameter. Third, the website response was analyzed and the element weights were adjusted to generate more efficient attack payloads. Finally, the proposed system was compared with the OWASP Zed Attack Proxy (ZAP) and Wapiti systems. Experimental results show that the number of potential user injection points found by the proposed system is increased by more than 12.5%, its false positive rate drops to 0.37%, and its false negative rate is below 2.23%. At the same time, the system reduces the number of requests and saves detection time.
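As a rough illustration (not the authors' implementation), the weight-adjustment loop the abstract describes can be sketched as follows; the grammar elements, weights and the "reflected" check are all made-up stand-ins for a real HTTP probe.

```python
import random

# Hypothetical attack-grammar fragments with adaptive weights (illustrative only).
ELEMENTS = {
    "<script>alert(1)</script>": 1.0,
    "<img src=x onerror=alert(1)>": 1.0,
    "\"onmouseover=alert(1) x=\"": 1.0,
}

def pick_element(weights):
    """Roulette-wheel selection proportional to current weights."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    acc = 0.0
    for elem, w in weights.items():
        acc += w
        if r <= acc:
            return elem
    return elem  # floating-point fall-through

def adjust(weights, elem, reflected, step=0.2, floor=0.1):
    """Reward elements whose payload survived filtering, penalise the rest."""
    weights[elem] = max(floor, weights[elem] + (step if reflected else -step))

random.seed(0)
for _ in range(100):
    payload = pick_element(ELEMENTS)
    reflected = "onerror" in payload  # stand-in for checking the site's response
    adjust(ELEMENTS, payload, reflected)

# After many rounds the surviving element dominates the weight mass.
best = max(ELEMENTS, key=ELEMENTS.get)
print(best)
```

Over repeated rounds the weight mass concentrates on payload forms that the target does not filter, which is the mechanism behind "generating a more efficient attack payload".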
E-forensics model for internet of vehicles based on blockchain
CHEN Weiwei, CAO Li, GU Xiang
Journal of Computer Applications    2021, 41 (7): 1989-1995.   DOI: 10.11772/j.issn.1001-9081.2020081205
To resolve the difficulties of forensics and determination of responsibility for traffic accidents, a blockchain-based e-forensics scheme under the Internet of Vehicles (IOV) communication architecture was proposed. In this scheme, remote storage of digital evidence was implemented using the decentralized storage mechanism of the blockchain, and fast retrieval of digital evidence and effective tracing of the related evidence chain were realized using smart contracts. Access control of the data was performed with a token mechanism to protect the privacy of vehicle identities. Meanwhile, a new consensus mechanism was proposed to meet the real-time forensics requirements of IOV. Simulation results show that the new consensus algorithm is more efficient than the traditional Delegated Proof Of Stake (DPOS) consensus algorithm and that the speed of forensics meets the requirements of the IOV environment. The scheme ensures that electronic evidence is tamper-proof, non-repudiable and permanently preserved, thereby realizing the application of blockchain technology in judicial forensics.
Abnormal flow detection based on improved one-dimensional convolutional neural network
HANG Mengxin, CHEN Wei, ZHANG Renjie
Journal of Computer Applications    2021, 41 (2): 433-440.   DOI: 10.11772/j.issn.1001-9081.2020050734
In order to solve the problems that traditional machine-learning-based abnormal flow detection methods rely heavily on hand-crafted features, while detection methods based on deep learning are inefficient and prone to overfitting, an abnormal flow detection method based on an Improved one-Dimensional Convolutional Neural Network (ICNN-1D), namely AFM-ICNN-1D, was proposed. Unlike the "convolution-pooling-full connection" structure of a traditional CNN, ICNN-1D is mainly composed of 2 convolutional layers, 2 global pooling layers, 1 dropout layer and 1 fully connected output layer. The preprocessed data were fed into ICNN-1D; the output of the two convolutional layers was used as the input of both a global average pooling layer and a global max pooling layer, and the resulting outputs were merged and sent to the fully connected layer for classification. The model was optimized according to the classification results on the real dataset and then applied to abnormal flow detection. Experimental results on the CIC-IDS-2017 dataset show that the accuracy and recall of AFM-ICNN-1D reach 98%, better than those of the comparative k-Nearest Neighbor (kNN) and Random Forest (RF) methods. Moreover, compared with a traditional CNN, the model parameters are reduced by about 97% and the training time is shortened by about 40%. The results show that AFM-ICNN-1D achieves high detection performance, reduces training time and avoids overfitting while better retaining the local characteristics of traffic data.
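A minimal numpy sketch (not the authors' code) of the pooling stage described above: feature maps from the last convolution are reduced by both a global average pool and a global max pool, and the two vectors are concatenated before the fully connected classifier. Shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
feature_maps = rng.standard_normal((8, 32, 64))  # (batch, channels, length)

gap = feature_maps.mean(axis=2)   # global average pooling -> (8, 32)
gmp = feature_maps.max(axis=2)    # global max pooling     -> (8, 32)
merged = np.concatenate([gap, gmp], axis=1)  # (8, 64) fed to the dense layer

print(merged.shape)
```

Merging the two poolings keeps both the average activation level and the strongest local response of each channel, which is one way to retain local traffic characteristics while discarding per-position parameters.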
Adaptive color mapping and its application in void evolution visualization
QIAO Jiewen, CHEN Wei
Journal of Computer Applications    2020, 40 (6): 1783-1792.   DOI: 10.11772/j.issn.1001-9081.2019111889
In order to improve the visualization of void evolution in materials, an adaptive color mapping method based on data characteristics was proposed. Firstly, a number of control points were selected in the CIELAB color space to form an initial color path. Then, based on the proportions of the data characteristic values, the positions of the control points were optimized and the color path was adjusted under constraints such as uniformity of color difference and consistency of brightness, so that the control points follow the data adaptively. Finally, the distribution of the perceptual difference sum was remapped by an equalization algorithm, and the perceptual uniformity of the color mapping was optimized to form the final color map. The experimental results show that, compared with traditional color mapping methods that consider only the color space and ignore the diversity of the data, the proposed method fully considers color proportion, the number of control points and self-adaptation, making the characteristics of the visualization results more identifiable. It also guarantees the perceptual uniformity of the void evolution visualization, improving the accuracy of the results and reducing the time required to observe effective information.
Fast convergence average TimeSynch algorithm for apron sensor network
CHEN Weixing, LIU Qingtao, SUN Xixi, CHEN Bin
Journal of Computer Applications    2020, 40 (11): 3407-3412.   DOI: 10.11772/j.issn.1001-9081.2020030290
The traditional Average TimeSynch (ATS) algorithm for the APron Sensor Network (APSN) converges slowly and is inefficient because of its distributed iteration. Based on the principle that algebraic connectivity affects the convergence speed of a consensus algorithm, a Fast Convergence Average TimeSynch (FCATS) algorithm was proposed. Firstly, virtual links were added between two-hop neighbor nodes in the APSN to increase network connectivity. Then, the relative clock skew, logical clock skew and offset of each node were updated based on information from its single-hop and two-hop neighbors. Finally, consensus iteration was performed according to the clock parameter update process. Simulation results show that FCATS converges after the consensus iteration, with a convergence speed about 50% higher than that of ATS; under different topologies, the convergence speed can be increased by more than 20%, a significant improvement.
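A toy sketch of the core idea, under assumed topology and constants (none of which are from the paper): averaging over one-hop plus two-hop "virtual" neighbours raises connectivity, so the clock offsets agree faster than with one-hop averaging alone.

```python
import numpy as np

one_hop = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a 4-node line network

def neighbours(adj, v, two_hop=True):
    """One-hop neighbours, optionally extended with two-hop (virtual) links."""
    nbrs = set(adj[v])
    if two_hop:
        for u in adj[v]:
            nbrs |= set(adj[u])
        nbrs.discard(v)
    return sorted(nbrs)

def consensus(offsets, adj, rounds, two_hop):
    """Each node repeatedly averages its offset with its neighbourhood."""
    x = np.array(offsets, float)
    for _ in range(rounds):
        x = np.array([np.mean([x[v]] + [x[u] for u in neighbours(adj, v, two_hop)])
                      for v in range(len(x))])
    return x

init = [0.0, 3.0, 5.0, 8.0]                     # initial clock offsets
ats = consensus(init, one_hop, rounds=10, two_hop=False)
fcats = consensus(init, one_hop, rounds=10, two_hop=True)
# Two-hop averaging lands much closer to a common value after the same rounds.
print(np.ptp(ats), np.ptp(fcats))
```

The spread (`np.ptp`) of the two-hop variant shrinks far faster per round, which is the connectivity effect FCATS exploits.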
Blockchain based efficient anonymous authentication scheme for IOV
CHEN Weiwei, CAO Li, SHAO Changhong
Journal of Computer Applications    2020, 40 (10): 2992-2999.   DOI: 10.11772/j.issn.1001-9081.2020020211
In order to solve the problems of low efficiency of centralized authentication and poor privacy protection in the Internet of Vehicles (IOV), an efficient anonymous authentication scheme based on blockchain technology was proposed. According to the IOV's characteristics of openness, self-organization and fast movement, the tamper-proof and distributed features of blockchain technology were used to generate temporary identities for vehicles and store them on the blockchain. Smart contracts were implemented for efficient anonymous two-way identity authentication during vehicle-to-vehicle communication. Experimental results show that, in terms of authentication efficiency, the proposed scheme exhibits slower growth in authentication delay and higher efficiency than traditional Public Key Infrastructure (PKI) authentication and pseudonym-authorization identity authentication; in terms of security, the temporary identities stored on the blockchain are tamper-proof, non-repudiable and traceable. In this scheme, a malicious vehicle's identity can be traced back and its authority controlled, while public-key cryptography and digital signatures ensure the confidentiality and integrity of communication data.
Gamification design and effect analysis of color education
LYU Ruimin, YANG Fan, LU Jing, CHEN Wei
Journal of Computer Applications    2019, 39 (8): 2456-2461.   DOI: 10.11772/j.issn.1001-9081.2019010106
Current research generally focuses on applying gamification to improve learning engagement. However, research on gamification in specific fields such as color education is insufficient, and analysis of gamification elements and the factors influencing learning effects is lacking. To address these problems, a game model for training color recognition was designed. Firstly, two ways of playing were designed with the same core gameplay but different interaction modes. Then, the same virtual reward was added to both. Finally, the effects of the two playing ways on learning were compared with and without the virtual reward, and the effect of the virtual reward within the same playing way was compared. The results show that gameplay design mainly affects learning efficiency, while virtual reward mainly affects engagement.
Single precision floating general matrix multiply optimization for machine translation based on ARMv8 architecture
GONG Mingqing, YE Huang, ZHANG Jian, LU Xingjing, CHEN Wei
Journal of Computer Applications    2019, 39 (6): 1557-1562.   DOI: 10.11772/j.issn.1001-9081.2018122608
Aiming at the inefficiency of neural network inference on mobile intelligent devices with ARM processors, an optimization scheme for the Single-precision floating GEneral Matrix Multiply (SGEMM) algorithm based on the ARMv8 architecture was proposed. Firstly, it was determined that the computational efficiency of an ARMv8 processor executing the SGEMM algorithm is limited by the usage of the vectorized computation units, the instruction pipeline and the probability of cache misses. Secondly, three optimization techniques, namely vector instruction inline assembly, data rearrangement and data prefetching, were implemented to address these three limiting factors. Finally, test experiments were designed around three matrix patterns commonly used in neural networks in the speech domain, and the programs were run on the RK3399 hardware platform. The experimental results show that the single-core computing speed is 10.23 GFLOPS in square matrix mode, reaching 78.2% of the measured floating-point peak; 6.35 GFLOPS in slender matrix mode, reaching 48.1% of the peak; and 2.53 GFLOPS in continuous small matrix mode, reaching 19.2% of the peak. With the optimized SGEMM algorithm deployed in a speech recognition neural network program, the actual speech recognition speed is significantly improved.
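The data-rearrangement idea can be illustrated with a portable cache-blocking (tiling) sketch; the paper's actual kernel uses NEON inline assembly and prefetch instructions on ARMv8, which cannot be shown in portable code, so this is only the blocking concept.

```python
import numpy as np

def sgemm_blocked(A, B, tile=4):
    """Tiled matrix multiply: each small tile of A and B stays cache-resident
    while it updates one tile of C, cutting cache misses versus a naive loop."""
    m, k = A.shape
    _, n = B.shape
    C = np.zeros((m, n), dtype=np.float32)
    for i0 in range(0, m, tile):
        for j0 in range(0, n, tile):
            for p0 in range(0, k, tile):
                C[i0:i0+tile, j0:j0+tile] += (
                    A[i0:i0+tile, p0:p0+tile] @ B[p0:p0+tile, j0:j0+tile]
                )
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8)).astype(np.float32)
B = rng.standard_normal((8, 8)).astype(np.float32)
assert np.allclose(sgemm_blocked(A, B), A @ B, atol=1e-4)
```

The tile size stands in for the real scheme's choice of block dimensions, which on actual hardware would be tuned to the L1/L2 cache sizes and NEON register count.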
Mechanism of trusted storage in Ethereum based on smart contract
CAO Didi, CHEN Wei
Journal of Computer Applications    2019, 39 (4): 1073-1080.   DOI: 10.11772/j.issn.1001-9081.2018092005
Aiming at the problems that the Ethereum platform has only simple data management functions and suffers from low throughput and high latency, a trusted storage mechanism based on smart contracts in Ethereum was proposed. Firstly, a trusted storage framework based on smart contracts was proposed to address the data management problems exposed in Ethereum. Secondly, the framework and implementation of the proposed mechanism were expounded in terms of centralized data processing, distributed storage of authenticated data and dynamic forensics. Finally, the feasibility of the mechanism was proved by system development based on smart contracts. The experimental and analysis results show that, compared with traditional relational database storage, the proposed method increases processing, storage and access credibility; compared with blockchain storage, it enriches data management functions, reduces the cost of block storage and improves storage efficiency.
Contextual authentication method based on device fingerprint of Internet of Things
DU Junxiong, CHEN Wei, LI Xueyan
Journal of Computer Applications    2019, 39 (2): 464-469.   DOI: 10.11772/j.issn.1001-9081.2018081955
Aiming at the security problem of remote control caused by illegal device access in the Internet of Things (IoT), a contextual authentication method based on device fingerprints was proposed. Firstly, the fingerprint of an IoT device was extracted from the interaction traffic by a proposed single-byte analysis method. Secondly, a process framework for authentication was proposed, and identity authentication was performed according to six contextual factors including the device fingerprint. Finally, in experiments on IoT devices, relevant device fingerprint features were extracted and combined with decision tree classification algorithms to verify the feasibility of the contextual authentication method. Experimental results show that the classification accuracy of the proposed method is 90%, and the 10% of false negatives are special cases that still satisfy the authentication requirements. The results show that contextual authentication based on IoT device fingerprints can ensure that only trusted IoT terminal devices access the network.
Defense strategy against browser cache pollution
DAI Chengrui, CHEN Wei
Journal of Computer Applications    2018, 38 (3): 693-698.   DOI: 10.11772/j.issn.1001-9081.2017082139
Browser caches are mainly used to speed up user requests for network resources; however, an attacker can carry out cache pollution via man-in-the-middle attacks. General defense strategies against browser cache pollution cannot cover different types of network attack, so a controllable browser cache pollution defense strategy was proposed, deployed between the client and the server. The strategy includes random number judgement, request-response delay judgement, resource representation judgement, hash verification and a crowdsourcing strategy, which together effectively defend against browser cache pollution. 200 JavaScript resource files were selected as experimental samples and 100 of them were polluted via man-in-the-middle attacks. By accessing these resources with the defense scripts enabled, the detection rate of contaminated samples and the false positive rate of normal samples were analyzed. The experimental results show that under loose conditions the hit rate for contaminated samples reaches 87% with a false positive rate of 0% for normal samples, while under strict conditions the hit rate reaches 95% with a false positive rate of 4%. Meanwhile, the request-response time differences of all experimental samples are 5277 ms and 6013 ms respectively, both less than the time required to reload all resources. The proposed strategy defends against most polluted resources, shortens user access time, simplifies the process of cache pollution prevention, and trades off security against usability through different parameters to satisfy different users.
Anti-collision algorithm for RFID based on counter and bi-slot
MO Lei, CHEN Wei, REN Ju
Journal of Computer Applications    2017, 37 (8): 2168-2172.   DOI: 10.11772/j.issn.1001-9081.2017.08.2168
Focusing on the problems of the binary search anti-collision algorithm in Radio Frequency IDentification (RFID) systems, namely many search rounds and a large amount of communication data, a new anti-collision algorithm for RFID with a counter and bi-slot, namely CBS, was proposed based on the regressive search tree algorithm and the time slot algorithm. Tags were searched step by step according to the slot counter in each tag and the collision bit information received by the reader. Responding tags were divided into two groups, which returned their data to the reader in two time slots. The reader sends only the position of the highest collision bit, and the tags send only the bits of data after the highest collision bit. Theoretical analysis and simulation results show that, compared with the traditional Regressive Binary Search (RBS) algorithm, the search rounds of the CBS algorithm are reduced by more than 51% and the communication data by more than 65%. The CBS algorithm is superior to commonly used anti-collision algorithms: it greatly reduces search rounds and communication data and improves search efficiency.
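A simplified simulation of the bi-slot splitting idea (the tag IDs and the collision-detection model are illustrative, and the slot counters and partial-ID transmission of the real protocol are omitted): after a collision the reader names the highest collision bit, and tags answer in slot 0 or slot 1 depending on that bit.

```python
def collision_bits(ids, width=8):
    """Bit positions where the responding tags disagree (Manchester-style)."""
    return [b for b in range(width - 1, -1, -1)
            if len({(i >> b) & 1 for i in ids}) > 1]

def identify(ids, width=8):
    """Recursively split the tag set on the highest collision bit; each split
    resolves in two slots instead of two full reader queries."""
    if len(ids) <= 1:
        return list(ids)
    top = collision_bits(ids, width)[0]
    slot0 = [i for i in ids if not (i >> top) & 1]
    slot1 = [i for i in ids if (i >> top) & 1]
    return identify(slot0, width) + identify(slot1, width)

tags = [0b00010110, 0b00110001, 0b10110001, 0b10111100]
print(identify(tags))
```

Because both halves of each split answer within one query's two slots, the number of reader transmissions roughly halves relative to querying each branch separately, which is the source of the reported savings.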
Challenges and recent progress in big data visualization
CUI Di, GUO Xiaoyan, CHEN Wei
Journal of Computer Applications    2017, 37 (7): 2044-2049.   DOI: 10.11772/j.issn.1001-9081.2017.07.2044
The advent of the big data era highlights the importance of visualization. As an important data analysis method, visual analytics exploits human cognitive abilities, integrates the abilities of human and computer, and gains insight into big data through human-computer interaction. In view of the characteristics of large volume, high dimensionality, multiple sources and multiple forms, the visualization of large-scale data was discussed first: 1) the divide-and-conquer principle was used to split a big problem into a number of smaller tasks, with parallel processing to improve speed; 2) aggregation, sampling and multi-resolution representation were used to reduce data; 3) multiple views were used to present high-dimensional data. Then, the visualization of flow data was discussed for its two types, monitoring and superposition. Finally, the visualization of unstructured and heterogeneous data was described. In short, visualization can make up for the shortcomings of automatic computer analysis, integrate computer analysis with human perception, and effectively reveal the information and wisdom behind big data. However, research results in this area are still limited, and the field faces the challenges of large scale, dynamic change, high dimensionality and multi-source heterogeneity, which are becoming the hot spots and directions of future big data visualization research.
Design of DDR3 protocol parsing logic based on FPGA
TAN Haiqing, CHEN Zhengguo, CHEN Wei, XIAO Nong
Journal of Computer Applications    2017, 37 (5): 1223-1228.   DOI: 10.11772/j.issn.1001-9081.2017.05.1223
Since the new generation of flash-based SSDs (Solid-State Drives) uses DDR3 as its interface, the SSD must communicate correctly with the memory controller. An FPGA (Field-Programmable Gate Array) was used to design the DDR3 protocol parsing logic. Firstly, the working principle of DDR3 was introduced to explain the controlling mechanism of the memory controller. Next, the architecture of the interface parsing logic was designed, and the key technical points, including clocking, write leveling, delay control and interface synchronization control, were implemented on the FPGA. Finally, the validity and feasibility of the proposed design were proved by ModelSim simulation results and board-level validation. In terms of performance, in tests of single data, continuous data and mixed read-write data, the bandwidth utilization of the DDR3 interface reaches up to 77.81%. The test results show that the DDR3 parsing logic can improve the access performance of the storage system.
User classification method based on multiple-layer network traffic analysis
MU Tao, CHEN Wei, CHEN Songjian
Journal of Computer Applications    2017, 37 (3): 705-710.   DOI: 10.11772/j.issn.1001-9081.2017.03.705
Accurate classification of users plays an important role in improving the quality of customized services, but for privacy reasons users often refuse to provide personal information such as location and hobbies to network service providers. To solve this problem, multi-layer network traffic, including the network layer and application layer, was analyzed under the premise of protecting user privacy, and machine learning methods such as K-means clustering and the random forest algorithm were used to predict the user's geographic location type (such as apartment or campus) and hobbies; the relationship between geographic location types and user interests was then analyzed to improve the accuracy of user classification. The experimental results show that the proposed scheme can adaptively partition user types and geographic location types, and improve the accuracy of user behavior analysis by correlating the user's geographic location type and user type.
Image super-resolution reconstruction combined with compressed sensing and nonlocal information
CHEN Weiye, SUN Quansen
Journal of Computer Applications    2016, 36 (9): 2570-2575.   DOI: 10.11772/j.issn.1001-9081.2016.09.2570
The existing super-resolution reconstruction algorithms consider only the gray information of image patches and ignore the texture information, and most nonlocal methods emphasize nonlocal information without considering local information. In view of these disadvantages, an image super-resolution reconstruction algorithm combining compressed sensing and nonlocal information was proposed. Firstly, the similarity between pixels was calculated according to the structural features of image patches, taking both gray and texture information into account. Then, the weights of similar pixels were evaluated by merging local and nonlocal information, and a regularization term combining the two was constructed. Finally, the nonlocal information was introduced into the compressed sensing framework, and the sparse representation coefficients were solved by the iterative shrinkage algorithm. Experimental results demonstrate that the proposed algorithm outperforms other learning-based algorithms in terms of Peak Signal-to-Noise Ratio and Structural Similarity, recovering fine textures better and effectively suppressing noise.
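The iterative shrinkage solver mentioned above can be sketched in its plain ISTA form on a synthetic sparse-recovery problem; the measurement matrix, sparsity level and regularization weight are all assumptions for illustration, and the paper's nonlocal regularization term is omitted.

```python
import numpy as np

def ista(A, y, lam=0.1, steps=2000):
    """Iterative Shrinkage-Thresholding: gradient step on ||Ax - y||^2 / 2
    followed by soft-thresholding, which promotes sparse coefficients."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((40, 100))         # 40 measurements of a length-100 signal
x_true = np.zeros(100)
x_true[[5, 37, 62]] = [1.5, -2.0, 1.0]     # 3-sparse ground truth
y = A @ x_true
x_hat = ista(A, y)
print(np.argsort(-np.abs(x_hat))[:3])      # indices of the largest recovered coefficients
```

With a random Gaussian measurement matrix and a sufficiently sparse signal, the support of the recovered coefficients matches the true nonzeros, which is the mechanism the compressed-sensing framework relies on.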
Performance optimization of wireless network based on canonical causal inference algorithm
HAO Zhifeng, CHEN Wei, CAI Ruichu, HUANG Ruihui, WEN Wen, WANG Lijuan
Journal of Computer Applications    2016, 36 (8): 2114-2120.   DOI: 10.11772/j.issn.1001-9081.2016.08.2114
The existing wireless network performance optimization methods are mainly based on correlation analysis between indicators, which cannot effectively guide the design of optimization strategies and other interventions. Thus, a Canonical Causal Inference (CCI) algorithm was proposed and used for wireless network performance optimization. Firstly, considering that wireless network performance is usually described by numerous correlated indicators, the Canonical Correlation Analysis (CCA) method was employed to extract atomic events from the indicators. Then, typical causal inference was conducted on the extracted atomic events to find the causality among them. The two stages were iterated to determine the causal network of the atomic events, providing a robust and effective basis for wireless network performance optimization. The validity of CCI was demonstrated by simulation experiments, and some valuable causal relations among wireless network indicators were found in the data of more than 30000 mobile base stations in a city.
Cloud service QoS prediction method based on Bayesian model
CHEN Wei, CHEN Jiming
Journal of Computer Applications    2016, 36 (4): 914-917.   DOI: 10.11772/j.issn.1001-9081.2016.04.0914
For Quality of Service (QoS) guarantees in the cloud service area, a cloud service QoS prediction method based on time series prediction was proposed, to select an appropriate cloud service that meets the QoS requirements of cloud users and to anticipate possible QoS violations. An improved Bayesian constant mean model was used to accurately predict the QoS of cloud services. In the experiment, a Hadoop system was established to simulate cloud computing, and a large amount of QoS data on response time and throughput was collected as the prediction object. The experimental results show that, compared with the Bayesian constant mean discount model and the Autoregressive Integrated Moving Average (ARIMA) model, the proposed method based on the improved Bayesian constant mean model is one order of magnitude smaller in Sum of Squared Errors (SSE), Mean Absolute Error (MAE), Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE), so it has higher accuracy; the comparison of prediction accuracy also illustrates that the proposed method has a better fitting effect.
Recognition of Chinese news event correlation based on grey relational analysis
LIU Panpan, HONG Xudong, GUO Jianyi, YU Zhengtao, WEN Yonghua, CHEN Wei
Journal of Computer Applications    2016, 36 (2): 408-413.   DOI: 10.11772/j.issn.1001-9081.2016.02.0408
Concerning the low accuracy of identifying related Chinese events, a correlation recognition algorithm for Chinese news events based on Grey Relational Analysis (GRA), a multiple-factor analysis method, was proposed. Firstly, three factors that affect event correlation, namely co-occurrence of triggers, nouns shared between events and the similarity of event sentences, were identified by analyzing the characteristics of Chinese news events. Secondly, the three factors were quantified and their influence weights were calculated. Finally, GRA was used to combine the three factors, and a GRA model between events was established to realize event correlation recognition. The experimental results show that the three factors are effective for event correlation recognition, and compared with methods using only a single influence factor, the proposed algorithm improves the accuracy of event correlation recognition.
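The grey relational degree that combines several factors into one closeness score can be sketched as follows; the factor values are made-up examples, not the paper's data, and equal factor weights are assumed.

```python
import numpy as np

def grey_relational_degree(reference, candidates, rho=0.5):
    """GRA: closeness of each candidate sequence to the reference sequence.
    rho is the conventional distinguishing coefficient."""
    diff = np.abs(candidates - reference)              # absolute differences
    dmin, dmax = diff.min(), diff.max()
    coeff = (dmin + rho * dmax) / (diff + rho * dmax)  # relational coefficients
    return coeff.mean(axis=1)                          # average over the factors

# Rows: candidate event pairs; columns: trigger co-occurrence, shared nouns,
# sentence similarity (all normalised to [0, 1]).
ref = np.array([1.0, 1.0, 1.0])                        # ideal "related" pair
events = np.array([[0.9, 0.8, 0.95],
                   [0.2, 0.1, 0.30]])
degrees = grey_relational_degree(ref, events)
print(degrees)   # the first pair is far closer to the reference
```

In a full system the per-factor coefficients would be combined with the learned influence weights rather than a plain mean; the mean here keeps the sketch minimal.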
Prediction for intermittent faults of ground air conditioning based on improved Apriori algorithm
CHEN Weixing, QU Rui, SUN Yigang
Journal of Computer Applications    2016, 36 (12): 3505-3510.   DOI: 10.11772/j.issn.1001-9081.2016.12.3505
Aiming at the problems caused by intermittent faults of ground air conditioning, such as low use efficiency and maintenance lag, a prediction method for intermittent faults combining re-association Array Summation (AS)-Apriori with K-means clustering was proposed, and delayed maintenance forecasting was realized on this basis. The AS-Apriori algorithm solves the low efficiency caused by Apriori's repeated scanning of the transaction database by constructing intermittent fault arrays and summing the corresponding items in real time. The goal of delayed maintenance forecasting is to estimate the critical region of a permanent fault so that reasonable maintenance can be arranged; this was realized by using the Gaussian distribution to solve the maintenance wave and delay probability of different intermittent fault variables and then accumulating them in order. The results show that operational efficiency is improved, the support degree of re-association rules is increased by 20.656 percentage points, and intermittent failures are predicted more accurately. Moreover, the data analysis shows that the forecast maintenance wave and delay probability follow a linear distribution, meaning that highly predictable intermittent faults are more convenient to maintain and manage in advance, reducing the formation of permanent faults.
Multisensor information fusion algorithm based on intelligent particle filtering
CHEN Weiqiang, CHEN Jun, ZHANG Chuang, SONG Liguo, TAN Zhuoli
Journal of Computer Applications    2016, 36 (12): 3358-3362.   DOI: 10.11772/j.issn.1001-9081.2016.12.3358
In order to solve the low-quality and degeneration problems of particles in particle filtering, a multisensor information fusion algorithm based on intelligent particle filtering was proposed. The algorithm proceeds in two steps. Firstly, the multisensor data were sent to the appropriate particle filtering calculation module, and the proposal distribution density was updated to optimize the particle distribution. Then, an integrated likelihood function model was constructed from the multisensor data in the intelligent particle filtering module; meanwhile, small-weight particles were modified into large-weight ones according to the designed genetic operators. The posterior distribution was thus approximated more closely, so large-weight particles were retained during resampling, which avoided particle impoverishment, maintained the diversity of the particles and improved filtering precision, finally yielding an accurate optimal estimate. The proposed algorithm was applied to a GPS/SINS/LOG integrated navigation system using prototype test data, and its effectiveness was verified by simulation. The simulation results show that the proposed algorithm obtains accurate position, speed and heading information and effectively improves filtering performance, which improves the calculation precision of the integrated navigation system and meets the requirements of high-precision navigation and positioning of ships.
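One cycle of the "modify small-weight particles toward large-weight ones" idea can be sketched with a toy scalar model; the Gaussian measurement model, the halfway pull toward the best particle, and all constants are stand-ins for the paper's genetic operators and sensor models.

```python
import numpy as np

rng = np.random.default_rng(3)

true_state = 2.0
particles = rng.normal(0.0, 2.0, 500)        # prior particle cloud
z = true_state + rng.normal(0.0, 0.1)        # one noisy sensor measurement

# Likelihood weights from a Gaussian measurement model (sigma = 0.5).
w = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)
w /= w.sum()

# "Genetic" repair: pull low-weight particles halfway toward the best particle,
# instead of discarding them outright as plain resampling would.
parent = particles[np.argmax(w)]
low = w < w.mean()
particles[low] = 0.5 * particles[low] + 0.5 * parent

# Recompute weights and take the weighted mean as the state estimate.
w = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)
w /= w.sum()
estimate = np.sum(w * particles)
print(estimate)
```

Because repaired particles stay distinct rather than becoming copies of one survivor, diversity is preserved while the cloud still concentrates near the measurement, which is the intuition behind avoiding particle impoverishment.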
Reference | Related Articles | Metrics
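The step of turning small-weight particles into large-weight ones can be sketched as a simple crossover toward the best particle. This is only a rough stand-in for the genetic operators described above, with scalar particles and a blending rule chosen for illustration; the paper's actual operators and likelihood model are not reproduced.

```python
import random

def resample_with_crossover(particles, weights, rng=random.Random(0)):
    """Blend each below-average-weight particle with the highest-weight
    particle, a rough analogue of a genetic crossover operator."""
    mean_w = sum(weights) / len(weights)
    best = particles[weights.index(max(weights))]
    out = []
    for p, w in zip(particles, weights):
        if w < mean_w:
            alpha = rng.random()
            out.append(alpha * p + (1 - alpha) * best)  # move toward best
        else:
            out.append(p)                               # keep as-is
    return out
```

Pulling low-weight particles toward high-weight regions keeps more of the sample set useful after resampling, which is the intuition behind avoiding particle impoverishment.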
Object recognition algorithm based on deep convolution neural networks
HUANG Bin, LU Jinjin, WANG Jianhua, WU Xingming, CHEN Weihai
Journal of Computer Applications    2016, 36 (12): 3333-3340.   DOI: 10.11772/j.issn.1001-9081.2016.12.3333
Abstract885)      PDF (1436KB)(1303)       Save
Focusing on the problem that the hand-designed features used by traditional object recognition algorithms are susceptible to variations in object shape, illumination and background, a deep convolutional neural network algorithm was proposed for object recognition. Firstly, the algorithm was trained on the NYU Depth V2 dataset, and the single-channel depth information was transformed into three channels. Then the color images and transformed depth images in the training set were used to fine-tune two deep convolutional neural networks, respectively. Next, color and depth image features were extracted from the first fully connected layers of the two trained models, and the two features from the resampled training set were combined to train a Linear Support Vector Machine (LinSVM) classifier. Finally, the proposed object recognition algorithm was used to extract super-pixel features in a scene understanding task. The proposed method achieves a classification accuracy of 91.4% on the test set, which is 4.1 percentage points higher than that of SAE-RNN (Sparse Auto-Encoder with the Recursive Neural Networks). The experimental results show that the proposed method is effective in extracting color and depth image features and can effectively improve classification accuracy.
Reference | Related Articles | Metrics
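The fusion step above reduces to concatenating the two fully-connected-layer feature vectors and feeding the result to a linear classifier. A minimal sketch follows; the tiny feature vectors and the classifier weights are made-up placeholders, and a real pipeline would use CNN activations and a trained LinSVM rather than fixed numbers.

```python
def fuse_features(color_feat, depth_feat):
    """Concatenate the fc-layer features of the color and depth networks
    into one vector for a linear SVM classifier."""
    return list(color_feat) + list(depth_feat)

def linear_score(weights, bias, feat):
    """Decision value of a linear classifier on the fused feature."""
    return sum(w * x for w, x in zip(weights, feat)) + bias

# Hypothetical 2-D color feature and 1-D depth feature
fused = fuse_features([1.0, 2.0], [3.0])
score = linear_score([1.0, 0.0, 1.0], 0.5, fused)
```

The design choice is that fusion happens at the feature level (before classification) rather than by averaging the two networks' outputs, so the classifier can weight color and depth dimensions independently.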
Fourier representation, rendering techniques and applications of periodic dynamic images
LYU Ruimin, CHEN Wei, MENG Lei, CHEN Lifang, WU Haotian, LI Jingyuan
Journal of Computer Applications    2015, 35 (8): 2280-2284.   DOI: 10.11772/j.issn.1001-9081.2015.08.2280
Abstract431)      PDF (896KB)(314)       Save
In order to create novel artistic effects, a periodic dynamic image model was proposed in which each element is a periodic function. Instead of using an array of color pixels to represent a digital image, a Fourier model was used to represent a periodic dynamic image as an array of functional pixels, and the output of each pixel was computed by a Fourier synthesis process. Then three applications with three rendering styles were put forward, including dynamic painting, dynamic distortion effects and dynamic speech balloons, to visually display the periodic dynamic images. A prototype system was constructed and a series of experiments were performed. The results demonstrate that the proposed method can effectively produce the novel artistic effects of periodic dynamic images and can serve as a new art medium.
Reference | Related Articles | Metrics
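The Fourier synthesis of one functional pixel can be sketched as a DC term plus a sum of harmonics; the particular amplitude/phase parameterization below is an assumption for illustration, not the paper's exact representation.

```python
import math

def pixel_value(t, period, a0, harmonics):
    """Fourier synthesis for one functional pixel: `harmonics` is a list
    of (amplitude, phase) pairs for harmonic k = 1, 2, ..."""
    v = a0  # DC component (the pixel's mean value)
    for k, (amp, phase) in enumerate(harmonics, start=1):
        v += amp * math.cos(2 * math.pi * k * t / period + phase)
    return v
```

Evaluating every pixel's function at the current time t yields one frame, so the image animates as t advances and repeats with the chosen period.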
Improved algorithm of time synchronization based on control perspective of wireless sensor network
ZENG Pei, CHEN Wei
Journal of Computer Applications    2015, 35 (10): 2852-2857.   DOI: 10.11772/j.issn.1001-9081.2015.10.2852
Abstract468)      PDF (906KB)(389)       Save
Focusing on the low synchronization accuracy and slow convergence caused by the susceptibility of time synchronization to disturbance and communication delay in Wireless Sensor Network (WSN), an improved time synchronization algorithm was proposed from a control perspective. Firstly, the clock synchronization state model was established; then, following modern control theory, a centralized control strategy was introduced and a time synchronization state model based on this control strategy was established. The centralized control scheme was designed based on global clock state information, and the optimal control was obtained by minimizing the performance index function together with the optimal estimate from Kalman filtering. Comparison simulations were carried out between the proposed clock synchronization optimization algorithm and the Timing-sync Protocol for Sensor Networks (TPSN). The results showed that, from the 6th step of clock synchronization, the synchronization error of the former was gradually smaller than that of the latter; to achieve the same relatively high synchronization precision, the former required about twenty percent of the steps of the latter; and the variance of the synchronization error at convergence of the former was two orders of magnitude lower than that of the latter. The results prove that the proposed time synchronization algorithm has higher synchronization accuracy, faster convergence and lower communication load than TPSN.
Reference | Related Articles | Metrics
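The Kalman-filtering component can be illustrated with a scalar update of one node's clock-offset estimate. This is a generic one-dimensional Kalman step under assumed noise variances, not the paper's full centralized state model.

```python
def kalman_offset_step(x_est, p_est, z, q, r):
    """One scalar Kalman update of a clock-offset estimate.
    x_est/p_est: prior estimate and its variance,
    z: measured offset, q/r: process and measurement noise variances."""
    p_pred = p_est + q               # predict: variance grows by process noise
    k = p_pred / (p_pred + r)        # Kalman gain
    x_new = x_est + k * (z - x_est)  # correct toward the measurement
    p_new = (1 - k) * p_pred         # posterior variance shrinks
    return x_new, p_new
```

Repeating this step as offset measurements arrive drives the estimate toward the true offset while filtering out measurement noise, which is what yields the faster error convergence reported above.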
Single video temporal super-resolution reconstruction algorithm based on maximum a posterior
GUO Li LIAO Yu CHEN Weilong LIAO Honghua LI Jun XIANG Jun
Journal of Computer Applications    2014, 34 (12): 3580-3584.  
Abstract151)      PDF (823KB)(613)       Save
Any video camera has a finite temporal resolution, which causes motion blur and motion aliasing in the captured video sequence. Spatial deblurring and temporal interpolation are usually adopted to address this problem, but these methods cannot solve it fully at the source. A temporal super-resolution reconstruction method for single video based on Maximum A Posterior (MAP) probability estimation was proposed in this paper. The conditional probability model was determined by the reconstruction constraint, and the prior information model was established by exploiting the temporal self-similarity within the video itself. From these two models, the maximum a posteriori estimate was obtained, namely a high temporal resolution video was reconstructed from a single low temporal resolution video, so as to effectively remove the motion blur caused by overlong exposure time and the motion aliasing caused by inadequate camera frame rate. Theoretical analysis and experiments demonstrate that the proposed method is effective and efficient.
Reference | Related Articles | Metrics
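The MAP estimation described above can be written generically as follows, where $L$ denotes the observed low temporal resolution video and $H$ the high temporal resolution video to be reconstructed (the symbols are chosen here for illustration; the paper's notation may differ):

\[
\hat{H} = \arg\max_{H} \, p(H \mid L) = \arg\max_{H} \, p(L \mid H)\, p(H),
\]

with $p(L \mid H)$ given by the reconstruction constraint (the low-resolution frames must be explained by temporal integration of the high-resolution ones) and $p(H)$ by the temporal self-similarity prior.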
Face recognition based on complete orthogonal neighbourhood preserving discriminant embedding
CHEN Dayao CHEN Weiqi CHEN Xiuhong
Journal of Computer Applications    2013, 33 (09): 2667-2670.   DOI: 10.11772/j.issn.1001-9081.2013.09.2667
Abstract758)      PDF (742KB)(406)       Save
In order to address the Small Sample Size (SSS) problem encountered by Neighbourhood Preserving Discriminant Embedding (NPDE) and make full use of the discriminant information in both the null space and the non-null space of the within-neighbourhood scatter matrix for face recognition, this paper proposed a Complete Orthogonal Neighbourhood Preserving Discriminant Embedding (CONPDE) algorithm. The algorithm first removed the null space of the total neighbourhood scatter matrix indirectly, using an eigen decomposition method. Then, the optimal discriminant vectors were extracted in the null space and the non-null space of the within-neighbourhood scatter matrix, respectively. Besides, to further improve recognition performance, an orthogonal projection matrix obtained by economic QR decomposition was given. Experiments on the ORL and Yale face databases demonstrate the effectiveness of the proposed method.
Related Articles | Metrics
Binary projection for image local descriptor
TANG Peikai CHEN Wei MAI Yicheng
Journal of Computer Applications    2013, 33 (04): 1096-1099.   DOI: 10.3724/SP.J.1087.2013.01096
Abstract793)      PDF (620KB)(537)       Save
In order to reduce the computational burden while maintaining the recognition rate of image local descriptors, a binary projection method for image local descriptors was proposed. The image patch was projected and transformed into a binary string, boosting performance as well as matching speed. The projection matrix was optimized by machine learning to maintain the recognition rate and robustness. The experimental results indicate that a binary string of only 32 bits performs as well as state-of-the-art descriptors while matching significantly faster.
Reference | Related Articles | Metrics
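The speed advantage of a 32-bit binary descriptor comes from matching by Hamming distance, which is a single XOR plus a popcount. A minimal sketch (the descriptor values are made up for illustration):

```python
def hamming32(a, b):
    """Hamming distance between two 32-bit binary descriptors:
    XOR the bit strings, then count the set bits."""
    return bin((a ^ b) & 0xFFFFFFFF).count("1")

def match(query, database):
    """Nearest descriptor in the database under Hamming distance."""
    return min(database, key=lambda d: hamming32(query, d))
```

On modern CPUs the popcount maps to a single instruction, which is why binary descriptors match far faster than floating-point descriptors compared with Euclidean distance.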
Finite element simulation of implant surgery for vocal cord paralysis
CHEN Weitao CHEN Dongfan HAN Xingqian ZHOU Chen GAO Xiang
Journal of Computer Applications    2013, 33 (03): 896-900.   DOI: 10.3724/SP.J.1087.2013.00896
Abstract725)      PDF (723KB)(427)       Save
As surgeons lack an effective way to predict the outcome of implant surgery for vocal cord paralysis, resulting in a high failure rate, the finite element method was used for preoperative simulation. From Computed Tomography (CT) data of the larynx, a 3D geometric model of the vocal cords and glottal trachea was extracted with Mimics and then imported into ANSYS-Fluent to simulate the vocal fold vibration mode and the airflow-structure coupling characteristics before and after implant surgery. The simulation data and clinical statistics were compared to demonstrate the feasibility of finite element analysis for simulating implant surgery for vocal cord paralysis. The experimental results can support the design of surgical plans.
Reference | Related Articles | Metrics
Markov-based survivability model for Web applications
QIN Zhiguang SONG Xu GENG Ji CHEN Wei
Journal of Computer Applications    2013, 33 (02): 400-403.   DOI: 10.3724/SP.J.1087.2013.00400
Abstract766)      PDF (637KB)(459)       Save
Current survivability models hardly provide a practical solution, nor do they reflect the properties of Web applications. Firstly, the properties of Web applications were analyzed, especially the differences between atomic Web applications and composite Web applications. Secondly, a mathematical model reflecting the invocation relationships among the atomic Web applications composing a composite Web application was constructed. Lastly, a survivability model for atomic Web applications and a Markov-based survivability model for composite Web applications with regard to the runtime environment were proposed. On the basis of these models, a recovery approach for Web applications was given for when part or all of their functions fail in an adverse environment. Besides, a case was analyzed using these models and its recovery procedures were given, in which high survivability was guaranteed.
Related Articles | Metrics
Reliable assurance model for distributed system survivability
GENG Ji CHEN Fei NIE Peng CHEN Wei QIN Zhi-guang
Journal of Computer Applications    2012, 32 (10): 2748-2751.   DOI: 10.3724/SP.J.1087.2012.02748
Abstract628)      PDF (619KB)(387)       Save
The cooperative rollback recovery mechanism based on checkpointing is an effective mechanism for distributed system survivability. Existing checkpointing-based cooperative rollback recovery mechanisms presume that the communication channel is reliable; however, this assumption does not always hold in practice. For realistic application scenarios of distributed systems, a reliability assurance model for distributed system survivability was proposed on the basis of the checkpointing-based rollback recovery mechanism. Through the creation of redundant communication channels and a process migration mechanism, the proposed model assures the survivability of a distributed system even when the communication channel is unreliable.
Reference | Related Articles | Metrics